Rethinking Crowdsourcing Annotation: Partial Annotation With Salient Labels for Multilabel Aerial Image Classification

Authors

Abstract

Annotated images are required for both supervised model training and evaluation in image classification. Manually annotating images is arduous and expensive, especially for multi-labeled images. A recent trend is to conduct such laboursome annotation tasks through crowdsourcing, where images are annotated by volunteers or paid workers online (e.g., on Amazon Mechanical Turk) from scratch. However, the quality of crowdsourcing annotations cannot be guaranteed, and incompleteness and incorrectness are two major concerns with such annotations. To address these concerns, we offer a rethinking of crowdsourcing annotations: our simple hypothesis is that if annotators only partially annotate multi-label images with the salient labels they are confident in, there will be fewer errors and less time spent on uncertain labels. As a pleasant surprise, under the same annotation budget, we show that a classifier trained on such partially annotated images can outperform models trained on fully annotated images. The contributions of our method are 2-fold: an active learning approach is proposed to acquire images, and a novel Adaptive Temperature Associated Model (ATAM) is designed specifically for training with partial annotations. We conduct experiments on practical data, the Open Street Map (OSM) dataset, and the benchmark COCO 2014 dataset. When compared with state-of-the-art classification methods trained on fully annotated images, ATAM can achieve higher accuracy. The proposed idea is promising for crowdsourcing data annotation. The code is publicly available.
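The abstract gives only a high-level description of ATAM, but the core training idea, learning a multi-label classifier from partially annotated images, can be illustrated with a small sketch. In the code below, unannotated labels are masked out of a binary cross-entropy loss and a learnable per-class temperature rescales the logits; the class name PartialLabelBCE, the mask convention, and the per-class temperature are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch of training with partially annotated multi-label images.
# Assumptions (not taken from the paper): a mask with 1 for annotated (salient)
# labels and 0 for unannotated ones, and "adaptive temperature" approximated by
# a learnable per-class temperature on the logits.
import torch
import torch.nn as nn


class PartialLabelBCE(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # One learnable temperature per class, stored in log-space so it stays positive.
        self.log_temperature = nn.Parameter(torch.zeros(num_classes))

    def forward(self, logits, targets, annotated_mask):
        # Rescale logits by the learned per-class temperature before the sigmoid.
        scaled = logits / self.log_temperature.exp()
        loss = nn.functional.binary_cross_entropy_with_logits(
            scaled, targets, reduction="none"
        )
        # Only annotated (salient) labels contribute to the loss;
        # unannotated labels are ignored rather than treated as negatives.
        masked = loss * annotated_mask
        return masked.sum() / annotated_mask.sum().clamp(min=1)


# Example: a batch of 4 images, 10 classes, with roughly 30% of labels annotated.
logits = torch.randn(4, 10)
targets = torch.randint(0, 2, (4, 10)).float()
annotated_mask = (torch.rand(4, 10) < 0.3).float()
criterion = PartialLabelBCE(num_classes=10)
print(criterion(logits, targets, annotated_mask))
```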


Similar Articles

Deep Convolutional Ranking for Multilabel Image Annotation

Multilabel image annotation is one of the most important challenges in computer vision, with many real-world applications. While existing work usually uses conventional visual features for multilabel annotation, features based on Deep Neural Networks have shown potential to significantly boost performance. In this work, we propose to leverage the advantage of such features and analyze key compone...

Full text

Fuzzy Neighbor Voting for Automatic Image Annotation

With the quick development of digital images and the availability of imaging tools, massive amounts of images are created. Therefore, efficient management and suitable retrieval, especially by computers, is one of the most challenging fields in image processing. Automatic image annotation (AIA) refers to attaching words, keywords or comments to an image or to a selected part of it. In this paper,...

Full text


Image Annotation in Presence of Noisy Labels

Labels associated with social images are a valuable source of information for tasks of image annotation, understanding and retrieval. These labels are often found to be noisy, mainly due to the collaborative tagging activities of users. Existing methods on annotation have been developed and verified on noise-free labels of images. In this paper, we propose a novel and generic framework that explo...

Full text

Semantic Annotation of Finance Regulatory Text using Multilabel Classification

The Financial Industry is experiencing continual and complex regulatory changes, on a global scale, making regulatory compliance a critical challenge. Semi-automated solutions based on semantic technologies are called for to assist subject matter experts in identifying, classifying and making sense of these regulatory changes. Previous work on fine-grained semantic annotation of regulatory text ...

Full text


Journal

Journal title: IEEE Transactions on Geoscience and Remote Sensing

Year: 2022

ISSN: 0196-2892, 1558-0644

DOI: https://doi.org/10.1109/tgrs.2022.3191735